
Fix Flux2 non-diffusers guidance LoRA conversion #13486

Merged
sayakpaul merged 5 commits into huggingface:main from yadferhad:oss/13484-lora-loading-on-text-pipe-for-modular-pipeline
Apr 18, 2026

Conversation

@yadferhad Contributor

Summary

  • map Flux2 guidance_in.* non-diffusers LoRA keys to the diffusers guidance embedder
  • raise a clear error when Flux2 LoRAs are loaded into the text-only modular sub-pipeline
  • add regression tests covering both behaviors

Root cause

The Flux2 non-diffusers conversion table handled time_in.* weights but omitted guidance_in.*, so those keys were left unconverted in the original state dict and the conversion failed. The text-only modular sub-pipeline also has no transformer, so the old loading path fell through to a misleading error instead of rejecting the call outright.
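For reference, the two conversion-table entries the PR adds look as follows. Only the guidance_in.* lines are taken from the PR itself (they are quoted verbatim in the review below); the surrounding extra_mappings structure is assumed from that same review.

# Flux2 non-diffusers -> diffusers key mapping (sketch).
extra_mappings = {
    # ...existing entries, including the time_in.* pair...
    # The two entries added by this PR:
    "guidance_in.in_layer": "time_guidance_embed.guidance_embedder.linear_1",
    "guidance_in.out_layer": "time_guidance_embed.guidance_embedder.linear_2",
}

The guard for the second issue is simpler. A hypothetical sketch of its shape, with the attribute check and error message assumed rather than copied from the diff:

# Sketch of the text-only sub-pipeline guard; the names and message are
# assumptions, the real check lives in src/diffusers/loaders/lora_pipeline.py.
def load_lora_weights(self, pretrained_model_name_or_path_or_dict, **kwargs):
    if getattr(self, "transformer", None) is None:
        raise ValueError(
            "This text-only modular sub-pipeline has no transformer, so Flux2 "
            "LoRA weights cannot be loaded here. Load them on the full pipeline."
        )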

Validation

  • python -m ruff check src/diffusers/loaders/lora_conversion_utils.py src/diffusers/loaders/lora_pipeline.py tests/lora/test_lora_conversion_utils_flux2.py
  • python -m pytest tests/lora/test_lora_conversion_utils_flux2.py -q
  • Flux2LoraLoaderMixin.lora_state_dict("lovis93/Flux-2-Multi-Angles-LoRA-v2") now returns 340 converted weights and includes guidance_embedder keys (see the sketch below)
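The last validation point can be reproduced with a short snippet. A minimal sketch, assuming lora_state_dict returns the converted state dict directly and that the mixin is importable from diffusers.loaders:

# Checks that the converted state dict now contains guidance_embedder keys.
from diffusers.loaders import Flux2LoraLoaderMixin  # import path assumed

state_dict = Flux2LoraLoaderMixin.lora_state_dict("lovis93/Flux-2-Multi-Angles-LoRA-v2")
guidance_keys = [k for k in state_dict if "guidance_embedder" in k]
assert guidance_keys, "expected converted guidance_embedder keys"
print(len(state_dict), "converted weights;", len(guidance_keys), "guidance keys")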

Fixes #13484

@yadferhad yadferhad marked this pull request as ready for review April 15, 2026 17:29
@yadferhad Contributor Author

I checked the extra issue context that was added after this PR went up and widened the fix accordingly.

Originally this PR only covered the missing guidance_in.* mapping plus the text-only sub-pipeline guard. After reviewing the added example, I updated the Flux2 non-diffusers conversion path to also accept expanded Flux2 block names directly, including the legacy sformer_blocks.* alias for double-stream blocks.

Validation on this branch (the weight_name checks are sketched after this list):

  • python -m pytest tests/lora/test_lora_conversion_utils_flux2.py -q
  • Flux2LoraLoaderMixin.lora_state_dict("fal/FLUX.2-dev-Turbo")
  • explicit checks with both weight_name="flux.2-turbo-lora.safetensors" and weight_name="comfy/Flux_2-Turbo-LoRA_comfyui.safetensors"
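The explicit weight_name checks, as a sketch under the same API assumptions as the snippet earlier in the thread:

# Load each advertised weight file explicitly and confirm conversion succeeds.
from diffusers.loaders import Flux2LoraLoaderMixin  # import path assumed

for name in ("flux.2-turbo-lora.safetensors", "comfy/Flux_2-Turbo-LoRA_comfyui.safetensors"):
    state_dict = Flux2LoraLoaderMixin.lora_state_dict("fal/FLUX.2-dev-Turbo", weight_name=name)
    assert state_dict, f"conversion produced no weights for {name}"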

One other note: the current Hub snapshot for fal/FLUX.2-dev-Turbo now appears to resolve both advertised weight paths to the same blob, so the sformer_blocks.* example looks like an older export/checkpoint variant. The updated normalization still covers that variant if it shows up again.

@sayakpaul sayakpaul left a comment (Member)

Thanks for your PR!

Review threads (now outdated):

  • tests/lora/test_lora_conversion_utils_flux2.py
  • src/diffusers/loaders/lora_pipeline.py (two threads)
  • src/diffusers/loaders/lora_conversion_utils.py
@sayakpaul Member

@claude could you do a review as well?

@github-actions Contributor

github-actions Bot commented Apr 18, 2026

Claude finished @sayakpaul's task in 1m 38s


Review of PR #13486

  • Read repository review guidelines (AGENTS.md, models.md, modular.md)
  • Read the PR diff and surrounding code context
  • Verify mappings against Flux2 transformer model definitions
  • Post review feedback

Review Summary

Overall: Looks good. The changes are correct and well-scoped after the review round with @sayakpaul.

guidance_in.* mapping (lines 2438–2439)

The two new entries in extra_mappings:

"guidance_in.in_layer": "time_guidance_embed.guidance_embedder.linear_1",
"guidance_in.out_layer": "time_guidance_embed.guidance_embedder.linear_2",

are verified correct.

This mirrors the existing time_in.* mapping and is consistent with the analogous Flux1 converter, which maps guidance_in.mlp.{0,2} → time_text_embed.guidance_embedder.linear_{1,2} (with the expected Flux1 vs Flux2 naming differences).

Expanded block prefix handling (lines 2334–2346)

The logic correctly (see the sketch after this list):

  1. Pops keys with expanded diffusers-style prefixes (single_transformer_blocks., transformer_blocks.) from original_state_dict into converted_state_dict, bypassing the single_blocks./double_blocks. conversion logic below
  2. Normalizes the sformer_blocks. alias → transformer_blocks.
  3. Uses list(original_state_dict.keys()) to avoid mutating-during-iteration
  4. Runs before the layer-counting loop, so num_double_layers/num_single_layers will be 0 for these checkpoints and the block-level conversion is correctly skipped
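Putting those four points together, a minimal sketch of the flow (variable names assumed; the real code is in src/diffusers/loaders/lora_conversion_utils.py):

# original_state_dict: the raw non-diffusers checkpoint mapping.
converted_state_dict = {}

for key in list(original_state_dict.keys()):  # snapshot keys to avoid mutating during iteration
    if key.startswith("sformer_blocks."):
        # Normalize the legacy alias to the diffusers double-stream prefix.
        new_key = "transformer_blocks." + key[len("sformer_blocks."):]
    else:
        new_key = key
    if new_key.startswith(("single_transformer_blocks.", "transformer_blocks.")):
        # Already diffusers-style: move it over, bypassing block conversion.
        converted_state_dict[new_key] = original_state_dict.pop(key)

# For fully expanded checkpoints nothing matches single_blocks./double_blocks.
# afterwards, so num_double_layers/num_single_layers stay 0 and the block-level
# conversion is skipped, as point 4 notes.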

One minor observation (non-blocking): for a checkpoint that mixes expanded and compact block names (e.g., transformer_blocks.0.… alongside double_blocks.0.…), both code paths would fire and both would contribute to converted_state_dict. This seems unlikely in practice but could silently produce duplicate/conflicting keys. Not worth addressing now since it's a hypothetical edge case, but worth noting for future awareness.

No correctness issues found. LGTM.


@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul sayakpaul merged commit 77f8cf8 into huggingface:main Apr 18, 2026
13 of 14 checks passed
@sayakpaul Member

Thanks for the PR!

@yadferhad yadferhad deleted the oss/13484-lora-loading-on-text-pipe-for-modular-pipeline branch April 20, 2026 18:36
terarachang pushed a commit to terarachang/diffusers that referenced this pull request Apr 30, 2026
* Fix Flux2 LoRA guidance conversion
* Handle expanded Flux2 LoRA block names
* Address Flux2 PR review feedback

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

Labels

lora, size/S (PR with diff < 50 LOC)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Lora loading fail on Flux 2 dev for FAL loras

3 participants